Search Results: "ewt"

2 February 2017

Paul Wise: FLOSS Activities January 2017

Changes

Issues

Review

Administration
  • Debian: reboot 1 non-responsive VM, redirect 2 users to support channels, redirect 1 contributor to xkb upstream, redirect 1 potential contributor, redirect 1 bug reporter to mirror team, ping 7 folks about restarting processes with upgraded libs, manually restart the sectracker process due to upgraded libs, restart the package tracker process due to upgraded libs, investigate failures connecting to the XMPP service, investigate /dev/shm issue on abel.d.o, clean up after rename of the fedmsg group.
  • Debian mentors: lintian/security updates & reboot
  • Debian packages: deploy 2 contributions to the live server
  • Debian wiki: unblacklist 1 IP address, whitelist 10 email addresses, disable 18 accounts with bouncing email, update email for 2 accounts with bouncing email, reported 1 Debian member as MIA, redirect 1 user to support channels, add 4 domains to the whitelist.
  • Reproducible builds: rescheduled Debian pyxplot:amd64/unstable for themill.
  • Openmoko: security updates & reboots.

Debian derivatives
  • Send the annual activity ping mail.
  • Happy new year messages on IRC, forward to the list.
  • Note that SerbianLinux does not provide source packages.
  • Expand URL shortener on SerbianLinux page.
  • Invite PelicanHPC, Netrunner, DietPi, Hamara Linux (on IRC), BitKey to the census.
  • Add research publications link to the census template
  • Fix Symbiosis sources.list
  • Enquired about SalentOS downtime
  • Fixed and removed some 404 BlankOn links (blog, English homepage)
  • Fixed changes to AstraLinux sources.list
  • Welcome Netrunner to the census

Sponsors I renewed my support of Software Freedom Conservancy. The openchange 1:2.2-6+deb8u1 upload was sponsored by my employer. All other work was done on a volunteer basis.

11 January 2017

Reproducible builds folks: Reproducible Builds: week 89 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday January 1 and Saturday January 7, 2017: GSoC and Outreachy updates Toolchain development Packages reviewed and fixed, and bugs filed Chris Lamb: Dhole: Reviews of unreproducible packages 13 package reviews have been added, 4 have been updated and 6 have been removed in this week, adding to our knowledge about identified issues. 2 issue types have been added/updated: Upstreaming of reproducibility fixes Merged: Opened: Weekly QA work During our reproducibility testing, the following FTBFS bugs have been detected and reported by: diffoscope development diffoscope 67 was uploaded to unstable by Chris Lamb. It included contributions from:
[ Chris Lamb ]
* Optimisations:
  - Avoid multiple iterations over archive by unpacking once for an ~8X
    runtime optimisation.
  - Avoid unnecessary splitting and interpolating for a ~20X optimisation
    when writing --text output.
  - Avoid expensive diff regex parsing until we need it, speeding up diff
    parsing by 2X.
  - Alias expensive Config() in diff parsing lookup for a 10% optimisation.
* Progress bar:
  - Show filenames, ELF sections, etc. in progress bar.
  - Emit JSON on the status file descriptor output instead of a custom
    format.
* Logging:
  - Use more-Pythonic logging functions and output based on __name__, etc.
  - Use Debian-style "I:", "D:" log level format modifier.
  - Only print milliseconds in output, not microseconds.
  - Print version in debug output so that saved debug outputs can stand alone
    as bug reports.
* Profiling:
  - Also report the total number of method calls, not just the total time.
  - Report on the total wall clock taken to execute diffoscope, including
    cleanup.
* Tidying:
  - Rename "NonExisting" -> "Missing".
  - Entirely rework diffoscope.comparators module, splitting as many separate
    concerns into a different utility package, tidying imports, etc.
  - Split diffoscope.difference into diffoscope.diff, etc.
  - Update file references in debian/copyright post module reorganisation.
  - Many other cleanups, etc.
* Misc:
  - Clarify comment regarding why we call python3(1) directly. Thanks to Jérémy
    Bobbio <lunar@debian.org>.
  - Raise a clearer error if trying to use --html-dir on a file.
  - Fix --output-empty when files are identical and no outputs specified.
[ Reiner Herrmann ]
* Extend .apk recognition regex to also match zip archives (Closes: #849638)
[ Mattia Rizzolo ]
* Follow the rename of the Debian package "python-jsbeautifier" to
  "jsbeautifier".
[ siamezzze ]
* Fixed no newline being classified as order-like difference.
reprotest development reprotest 0.5 was uploaded to unstable by Chris Lamb. It included contributions from:
[ Ximin Luo ]
* Stop advertising variations that we're not actually varying.
  That is: domain_host, shell, user_group.
* Fix auto-presets in the case of a file in the current directory.
* Allow disabling build-path variations. (Closes: #833284)
* Add a faketime variation, with NO_FAKE_STAT=1 to avoid messing with
  various buildsystems. This is on by default; if it causes your builds
  to mess up please do file a bug report.
* Add a --store-dir option to save artifacts.
Other contributions (not yet uploaded): reproducible-builds.org website development tests.reproducible-builds.org Misc. This week's edition was written by Chris Lamb, Holger Levsen and Vagrant Cascadian, reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

9 December 2016

John Goerzen: Giant Concrete Arrows, Old Maps, and Fascinated Kids

Let me set a scene for you. Two children, ages 7 and 10, are jostling for position. There's a little pushing and shoving to get the best view. This is pretty typical for siblings this age. But what, you may wonder, are they trying to see? A TV? Video game? No. Jacob and Oliver were in a library, trying to see a 98-year-old map of the property owners in Township 23, range 1 East, Harvey County, Kansas. And they were super excited about it, somewhat to the astonishment of the research librarian, who I am sure is more used to children jostling for position over the DVDs in the youth section than poring over maps in the non-circulating historical archives! All this started with giant concrete arrows in the middle of nowhere. Nearly a century ago, the US government installed a series of arrows on the ground in Kansas. These were part of a primitive air navigation system that led to the first transcontinental airmail service. Every so often, people stumble upon these abandoned arrows and there is a big discussion online. Even Snopes has had to verify their authenticity (verdict: true). Entire websites are devoted to tracking and locating the remnants of these arrows. And as one of the early air mail routes went through Kansas, every so often people find these arrows around here. I got the idea that it would be fun to replicate a journey along the old routes. Maybe I'd spot a few old arrows and such. So I started collecting old maps: a Contract Airmail Route #34 (CAM 34) map from 1927, aviation sectionals from 1933 and 1946, etc. I noticed an odd thing on these maps: the Newton, KS airport was on the other side of the city from its present location, sometimes even several miles outside the city. What was going on?
(1927 Airway Map)
(1946 Wichita sectional)
So one foggy morning, I explained my puzzlement to the boys. I highlighted all the mysteries: were these maps correct? Were there really two Newton airports at one time? How many airports were there, and where were they? Why did they move? What was the story behind them? And I offered them the chance to be history detectives with me. And oh my goodness, were they ever excited! We had some information from a very helpful person at the Harvey County Historical Museum (thanks, Kris!). So we suspected one airport at least was established in 1927. We also had a description of its location, though given in terms of township maps. So the boys and I made the short drive over to the museum. We reviewed their property maps, though they were all a little older than the time period we needed. We looked through books and at pictures. Oliver pored over a railroad map of Newton from a century ago, fascinated. Jacob was excited to discover on one map that there used to be a train track down the middle of Main Street! I was interested that the present Newton Airport was once known as Wirt Field, rather to my surprise. I somehow suspect most 2nd and 4th graders spend a lot less excited time on their research floor! Then on to the Newton Public Library to see if they'd have anything more, and that's when the map that produced all the excitement came out. It, by itself, didn't answer the question, but by piecing together a number of pieces of information (newspaper stories, information from the museum, and the maps) we were able to come up with a pretty good explanation, much to their excitement. Apparently, a man named Tangeman owned a golf course (the "golf links", according to the paper), and around 1927 the city of Newton purchased it, because of all the planes that were landing there. They turned it into a real airport. Later, they bought land east of the city and moved the airport there. However, during World War II, the Navy took over that location, so they built a third airport a few miles west of the city, but moved back to the current east location after the Navy returned that field to them. Of course, a project like this just opens up all sorts of extra questions: why isn't it called Wirt Field anymore? What's the story of Frank Wirt? What led the Navy to take over Newton's airport? Why did planes start landing on the golf course? Where precisely was the west airport located? How long was it there? (I found an aerial photo from 1956 that looks like it may have a plane in that general area, but it seems later than I'd have expected.) So now I have the boys interested in going to the courthouse with me to research the property records out there. Jacob is continually astounded that we are discovering things that aren't in Wikipedia, and also excited that he could be the one to add them. To be continued, apparently!

5 December 2016

Norbert Preining: Debian/TeX Live 2016.20161130-1

As we are moving closer to the Debian release freeze, I am shipping out a new set of packages. Nothing spectacular here, just the regular updates and a security fix that was only reported internally. Add sugar and a few minor bug fixes.
texlive2016-debian I have been silent for quite some time, busy at my new job, busy with my little monster, writing papers, caring for visitors, living. I have quite a lot of things I want to write, but not enough time, so very short only this one. Enjoy. New packages awesomebox, baskervillef, forest-quickstart, gofonts, iscram, karnaugh-map, tikz-optics, tikzpeople, unicode-bidi. Updated packages acmart, algorithms, aomart, apa, apa6, appendix, apxproof, arabluatex, asymptote, background, bangorexam, beamer, beebe, biblatex-gb7714-2015, biblatex-mla, biblatex-morenames, bibtexperllibs, bidi, bookcover, bxjalipsum, bxjscls, c90, cals, cell, cm, cmap, cmextra, context, cooking-units, ctex, cyrillic, dirtree, ekaia, enotez, errata, euler, exercises, fira, fonts-churchslavonic, formation-latex-ul, german, glossaries, graphics, handout, hustthesis, hyphen-base, ipaex, japanese, jfontmaps, kpathsea, l3build, l3experimental, l3kernel, l3packages, latex2e-help-texinfo-fr, layouts, listofitems, lshort-german, manfnt, mathastext, mcf2graph, media9, mflogo, ms, multirow, newpx, newtx, nlctdoc, notes, patch, pdfscreen, phonenumbers, platex, ptex, quran, readarray, reledmac, shapes, showexpl, siunitx, talk, tcolorbox, tetex, tex4ht, texlive-en, texlive-scripts, texworks, tikz-dependency, toptesi, tpslifonts, tracklang, tugboat, tugboat-plain, units, updmap-map, uplatex, uspace, wadalab, xecjk, xellipsis, xepersian, xint.

3 November 2016

Norbert Preining: Debian/TeX Live 2016.20161103-1

This month's update falls on a national holiday in Japan. My recent start as a normal company employee in Japan doesn't leave me enough time during normal days to work on Debian, so things have to wait for holidays. There have been a few notable changes in the current packages, and above all I wanted to fix an RC bug, and on the way I also fixed several other (sometimes rather old) bugs.
texlive2016-debian From the list of new packages I want to pick apxproof: I have written something myself for one of my rather long papers (with proofs about 60pp), where at times I had to factor out the proofs into an appendix. I did this my own way, but I would have preferred to have a nice package! Another interesting change is the upstream merge of collection-mathextra (which translated to the Debian package texlive-math-extra) and collection-science (Debian: texlive-science) into a new collection, collection-mathscience. Since introducing new packages and phasing out old ones is generally a pain in Debian, I decided to digress from the upstream naming convention and use texlive-science for the new collection-mathscience. In the end, Mathematics is the most important science of all! Finally also a word about removals: Several ConTeXt packages have been removed due to the fact that they are outdated. These removals will find their way into an update of the Debian ConTeXt package in the near future. The TeX Live packages lost voss-mathmode, which was retracted by the author for various reasons. He is working on an updated version that will hopefully reappear in both TeX Live and Debian in the near future. Well, that's it for now. Here now the full list with links. Enjoy. New packages apxproof, bangorexam, biblatex-gb7714-2015, biblatex-lni, biblatex-sbl, context-cmscbf, context-cmttbf, context-inifile, context-layout, delimset, latex2nemeth, latexbangla, latex-papersize, ling-macros, notex-bst, platex-tools, testidx, uppunctlm, wtref, xcolor-material. Removed packages voss-mathmode. Updated packages apa6, autoaligne, babel-german, biblatex-abnt, biblatex-anonymous, biblatex-apa, biblatex-manuscripts-philology, biblatex-nature, biblatex-realauthor, bibtex, bidi, boondox, bxcjkjatype, chickenize, churchslavonic, cjk-gs-integrate, context-filter, cooking-units, ctex, denisbdoc, dvips, europasscv, fixme, glossaries, gzt, handout, imakeidx, ipaex-type1, jsclasses, jslectureplanner, kpathsea, l3build, l3experimental, l3kernel, l3packages, latexindent, latexmk, listofitems, luatexja, marginnote, mcf2graph, minted, multirow, nameauth, newpx, newtx, noto, nucleardata, optidef, overlays, pdflatexpicscale, pst-eucl, reledmac, repere, scanpages, semantic-markup, tableaux, tcolorbox, tetex, texlive-scripts, ticket, todonotes, tracklang, tudscr, turabian-formatting, updmap-map, uspace, visualtikz, xassoccnt, xecjk, yathesis.

28 October 2016

Alessio Treglia: The logical contradictions of the Universe

Ouroboros


Is Erwin Schrödinger's wave function (which did in the atomic and subatomic world an operation altogether similar to the one performed by Newton in the macroscopic world) an objective reality or just subjective knowledge? Physicists, philosophers and epistemologists have debated at length on this matter. In 1960, theoretical physicist Eugene Wigner proposed that the observer's consciousness is the dividing line that triggers the collapse of the wave function[1], and this theory was later taken up and developed in recent years. "The rules of quantum mechanics are correct but there is only one system which may be treated with quantum mechanics, namely the entire material world. There exist external observers which cannot be treated within quantum mechanics, namely human (and perhaps animal) minds, which perform measurements on the brain causing wave function collapse"[2]. The English mathematical physicist and philosopher of science Roger Penrose developed the hypothesis called Orch-OR (Orchestrated objective reduction), according to which consciousness originates from processes within neurons, rather than from the connections between neurons (the conventional view). The mechanism is believed to be a quantum physical process called objective reduction which is orchestrated by the molecular structures of the microtubules of brain cells (which constitute the cytoskeleton of the cells themselves). Together with the physician Stuart Hameroff, Penrose has suggested a direct relationship between the quantum vibrations of microtubules and the formation of consciousness.

<Read More [by Fabio Marzocca]>

26 October 2016

Joachim Breitner: Showcasing Applicative

My plan for this week's lecture of the CIS 194 Haskell course at the University of Pennsylvania is to dwell a bit on the concept of Functor, Applicative and Monad, and to highlight the value of the Applicative abstraction. I quite like the example that I came up with, so I want to share it here. In the interest of long-term archival and stand-alone presentation, I include all the material in this post.1

Imports In case you want to follow along, start with these imports:
import Data.Char
import Data.Maybe
import Data.List
import System.Environment
import System.IO
import System.Exit

The parser The starting point for this exercise is a fairly standard parser-combinator monad, which happens to be the result of the students' homework from last week:
newtype Parser a = P (String -> Maybe (a, String))
runParser :: Parser t -> String -> Maybe (t, String)
runParser (P p) = p
parse :: Parser a -> String -> Maybe a
parse p input = case runParser p input of
    Just (result, "") -> Just result
    _ -> Nothing -- handles both no result and leftover input
noParserP :: Parser a
noParserP = P (\_ -> Nothing)
pureParserP :: a -> Parser a
pureParserP x = P (\input -> Just (x,input))
instance Functor Parser where
    fmap f p = P $ \input -> do
	(x, rest) <- runParser p input
	return (f x, rest)
instance Applicative Parser where
    pure = pureParserP
    p1 <*> p2 = P $ \input -> do
        (f, rest1) <- runParser p1 input
        (x, rest2) <- runParser p2 rest1
        return (f x, rest2)
instance Monad Parser where
    return = pure
    p1 >>= k = P $ \input -> do
        (x, rest1) <- runParser p1 input
        runParser (k x) rest1
anyCharP :: Parser Char
anyCharP = P $ \input -> case input of
    (c:rest) -> Just (c, rest)
    []       -> Nothing
charP :: Char -> Parser ()
charP c = do
    c' <- anyCharP
    if c == c' then return ()
               else noParserP
anyCharButP :: Char -> Parser Char
anyCharButP c = do
    c' <- anyCharP
    if c /= c' then return c'
               else noParserP
letterOrDigitP :: Parser Char
letterOrDigitP = do
    c <- anyCharP
    if isAlphaNum c then return c else noParserP
orElseP :: Parser a -> Parser a -> Parser a
orElseP p1 p2 = P $ \input -> case runParser p1 input of
    Just r -> Just r
    Nothing -> runParser p2 input
manyP :: Parser a -> Parser [a]
manyP p = (pure (:) <*> p <*> manyP p) `orElseP` pure []
many1P :: Parser a -> Parser [a]
many1P p = pure (:) <*> p <*> manyP p
sepByP :: Parser a -> Parser () -> Parser [a]
sepByP p1 p2 = (pure (:) <*> p1 <*> (manyP (p2 *> p1))) `orElseP` pure []
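Before building a real parser on top of these combinators, a quick GHCi session may help to see them in action (my own illustration, not from the original post):
*Main> runParser anyCharP "abc"
Just ('a',"bc")
*Main> parse (many1P letterOrDigitP) "ab1"
Just "ab1"
*Main> parse (many1P letterOrDigitP) "ab!"
Nothing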
A parser using this library for, for example, CSV files could take this form:
parseCSVP :: Parser [[String]]
parseCSVP = manyP parseLine
  where
    parseLine = parseCell `sepByP` charP ',' <* charP '\n'
    parseCell = do
        charP '"'
        content <- manyP (anyCharButP '"')
        charP '"'
        return content
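A quick sanity check of this parser in GHCi (my own example run, not from the original post):
*Main> parse parseCSVP "\"ab\",\"cd\"\n"
Just [["ab","cd"]]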

We want EBNF Often when we write a parser for a file format, we might also want to have a formal specification of the format. A common form for such a specification is EBNF. This might look as follows, for a CSV file:
cell = '"',  not-quote , '"';
line = (cell,  ',', cell    ''), newline;
csv  =  line ;
It is straightforward to create a Haskell data type to represent an EBNF syntax description. Here is a simple EBNF library (data type and pretty-printer) for your convenience:
data RHS
  = Terminal String
  | NonTerminal String
  | Choice RHS RHS
  | Sequence RHS RHS
  | Optional RHS
  | Repetition RHS
  deriving (Show, Eq)
ppRHS :: RHS -> String
ppRHS = go 0
  where
    go _ (Terminal s)     = surround "'" "'" $ concatMap quote s
    go _ (NonTerminal s)  = s
    go a (Choice x1 x2)   = p a 1 $ go 1 x1 ++ " | " ++ go 1 x2
    go a (Sequence x1 x2) = p a 2 $ go 2 x1 ++ ", "  ++ go 2 x2
    go _ (Optional x)     = surround "[" "]" $ go 0 x
    go _ (Repetition x)   = surround "{" "}" $ go 0 x
    surround c1 c2 x = c1 ++ x ++ c2
    p a n | a > n     = surround "(" ")"
          | otherwise = id
    quote '\'' = "\\'"
    quote '\\' = "\\\\"
    quote c    = [c]
type Production = (String, RHS)
type BNF = [Production]
ppBNF :: BNF -> String
ppBNF = unlines . map (\(i,rhs) -> i ++ " = " ++ ppRHS rhs ++ ";")
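To see the pretty-printer at work, here is a tiny hand-built example (my own illustration, not from the original post):
*Main> putStr $ ppBNF [("csv", Repetition (NonTerminal "line"))]
csv = {line};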

Code to produce EBNF We had a good time writing combinators that create complex parsers from primitive pieces. Let us do the same for EBNF grammars. We could simply work on the RHS type directly, but we can do something more nifty: We create a data type that keeps track, via a phantom type parameter, of which Haskell type the given EBNF syntax describes:
newtype Grammar a = G RHS
ppGrammar :: Grammar a -> String
ppGrammar (G rhs) = ppRHS rhs
So a value of type Grammar t is a description of the textual representation of the Haskell type t. Here is one simple example:
anyCharG :: Grammar Char
anyCharG = G (NonTerminal "char")
Here is another one. This one does not describe any interesting Haskell type, but is useful when spelling out the special characters in the syntax described by the grammar:
charG :: Char -> Grammar ()
charG c = G (Terminal [c])
A combinator that creates a new grammar from two existing grammars:
orElseG :: Grammar a -> Grammar a -> Grammar a
orElseG (G rhs1) (G rhs2) = G (Choice rhs1 rhs2)
We want the convenience of our well-known type classes in order to combine these values some more:
instance Functor Grammar where
    fmap _ (G rhs) = G rhs
instance Applicative Grammar where
    pure x = G (Terminal "")
    (G rhs1) <*> (G rhs2) = G (Sequence rhs1 rhs2)
Note how the Functor instance does not actually use the function. How should it? There are no values inside a Grammar! We cannot define a Monad instance for Grammar: We would start with (G rhs1) >>= k = …, but there is simply no way of getting a value of type a that we can feed to k. So we will do without a Monad instance. This is interesting, and we will come back to that later. Like with the parser, we can now begin to build on the primitive grammars to create more complicated combinators:
manyG :: Grammar a -> Grammar [a]
manyG p = (pure (:) <*> p <*> manyG p) `orElseG` pure []
many1G :: Grammar a -> Grammar [a]
many1G p = pure (:) <*> p <*> manyG p
sepByG :: Grammar a -> Grammar () -> Grammar [a]
sepByG p1 p2 = ((:) <$> p1 <*> (manyG (p2 *> p1))) `orElseG` pure []
Let us run a small example:
dottedWordsG :: Grammar [String]
dottedWordsG = many1G (manyG anyCharG <* charG '.')
*Main> putStrLn $ ppGrammar dottedWordsG
'', ('', char, ('', char, ('', char, ('', char, ('', char, ('', …
Oh my, that is not good. Looks like the recursion in manyG does not work well, so we need to avoid that. But anyways we want to be explicit in the EBNF grammars about where something can be repeated, so let us just make many a primitive:
manyG :: Grammar a -> Grammar [a]
manyG (G rhs) = G (Repetition rhs)
With this definition, we already get a simple grammar for dottedWordsG:
*Main> putStrLn $ ppGrammar dottedWordsG
'', {char}, '.', {{char}, '.'}
This already looks like a proper EBNF grammar. One thing that is not nice about it is that there is an empty string ('') in a sequence (…, …). We do not want that. Why is it there in the first place? Because our Applicative instance is not lawful! Remember that pure id <*> g == g should hold. One way to achieve that is to improve the Applicative instance to optimize this case away:
instance Applicative Grammar where
    pure x = G (Terminal "")
    G (Terminal "") <*> G rhs2 = G rhs2
    G rhs1 <*> G (Terminal "") = G rhs1
    (G rhs1) <*> (G rhs2) = G (Sequence rhs1 rhs2)
Now we get what we want:
*Main> putStrLn $ ppGrammar dottedWordsG
{char}, '.', {{char}, '.'}
Remember our parser for CSV files above? Let me repeat it here, this time using only Applicative combinators, i.e. avoiding (>>=), (>>), return and do-notation:
parseCSVP :: Parser [[String]]
parseCSVP = manyP parseLine
  where
    parseLine = parseCell `sepByP` charP ',' <* charP '\n'
    parseCell = charP '"' *> manyP (anyCharButP '"') <* charP '"'
And now we try to rewrite the code to produce Grammar instead of Parser. This is straightforward with the exception of anyCharButP. The parser code for that is inherently monadic, and we just do not have a Monad instance. So we work around the issue by making that a primitive grammar, i.e. introducing a non-terminal in the EBNF without a production rule, pretty much like we did for anyCharG:
primitiveG :: String -> Grammar a
primitiveG s = G (NonTerminal s)
parseCSVG :: Grammar [[String]]
parseCSVG = manyG parseLine
  where
    parseLine = parseCell `sepByG` charG ',' <* charG '\n'
    parseCell = charG '"' *> manyG (primitiveG "not-quote") <* charG '"'
Of course the parse prefixes in the names are not quite right any more, but let us just leave that for now. Here is the result:
*Main> putStrLn $ ppGrammar parseCSVG
 ('"',  not-quote , '"',  ',', '"',  not-quote , '"'    ''), '
' 
The line break is weird. We do not really want newlines in the grammar. So let us make that primitive as well, and replace charG '\n' with newlineG:
newlineG :: Grammar ()
newlineG = primitiveG "newline"
Now we get
*Main> putStrLn $ ppGrammar parseCSVG
 ('"',  not-quote , '"',  ',', '"',  not-quote , '"'    ''), newline 
which is nice and correct, but still not quite the easily readable EBNF that we saw further up.

Code to produce EBNF, with productions We currently let our grammars produce only the right-hand side of one EBNF production, but really, we want to produce a RHS that may refer to other productions. So let us change the type accordingly:
newtype Grammar a = G (BNF, RHS)
runGrammer :: String -> Grammar a -> BNF
runGrammer main (G (prods, rhs)) = prods ++ [(main, rhs)]
ppGrammar :: String -> Grammar a -> String
ppGrammar main g = ppBNF $ runGrammer main g
Now we have to adjust all our primitive combinators (but not the derived ones!):
charG :: Char -> Grammar ()
charG c = G ([], Terminal [c])
anyCharG :: Grammar Char
anyCharG = G ([], NonTerminal "char")
manyG :: Grammar a -> Grammar [a]
manyG (G (prods, rhs)) = G (prods, Repetition rhs)
mergeProds :: [Production] -> [Production] -> [Production]
mergeProds prods1 prods2 = nub $ prods1 ++ prods2
orElseG :: Grammar a -> Grammar a -> Grammar a
orElseG (G (prods1, rhs1)) (G (prods2, rhs2))
    = G (mergeProds prods1 prods2, Choice rhs1 rhs2)
instance Functor Grammar where
    fmap _ (G bnf) = G bnf
instance Applicative Grammar where
    pure x = G ([], Terminal "")
    G (prods1, Terminal "") <*> G (prods2, rhs2)
        = G (mergeProds prods1 prods2, rhs2)
    G (prods1, rhs1) <*> G (prods2, Terminal "")
        = G (mergeProds prods1 prods2, rhs1)
    G (prods1, rhs1) <*> G (prods2, rhs2)
        = G (mergeProds prods1 prods2, Sequence rhs1 rhs2)
primitiveG :: String -> Grammar a
primitiveG s = G ([], NonTerminal s)
The use of nub when combining productions removes duplicates that might be used in different parts of the grammar. Not efficient, but good enough for now. Did we gain anything? Not yet:
*Main> putStr $ ppGrammar "csv" (parseCSVG)
csv = {('"', {not-quote}, '"', {',', '"', {not-quote}, '"'} | ''), newline};
But we can now introduce a function that lets us tell the system where to give names to a piece of grammar:
nonTerminalG :: String -> Grammar a -> Grammar a
nonTerminalG name (G (prods, rhs))
  = G (prods ++ [(name, rhs)], NonTerminal name)
Ample use of this in parseCSVG yields the desired result:
parseCSVG :: Grammar [[String]]
parseCSVG = manyG parseLine
  where
    parseLine = nonTerminalG "line" $
        parseCell `sepByG` charG ',' <* newlineG
    parseCell = nonTerminalG "cell" $
        charG '"' *> manyG (primitiveG "not-quote") <* charG '"'
*Main> putStr $ ppGrammar "csv" (parseCSVG)
cell = '"',  not-quote , '"';
line = (cell,  ',', cell    ''), newline;
csv =  line ;
This is great!

Unifying parsing and grammar-generating Note how similar parseCSVG and parseCSVP are! Would it not be great if we could implement that functionality only once, and get both a parser and a grammar description out of it? This way, the two would never be out of sync! And surely this must be possible. The tool to reach for is of course to define a type class that abstracts over the parts where Parser and Grammar differ. So we have to identify all functions that are primitive in one of the two worlds, and turn them into type class methods. This includes char and orElse. It includes many, too: Although manyP is not primitive, manyG is. It also includes nonTerminal, which does not exist in the world of parsers (yet), but we need it for the grammars. The primitiveG function is tricky. We use it in grammars when the code that we might use while parsing is not expressible as a grammar. So the solution is to let it take two arguments: a String, used as a descriptive non-terminal in a grammar, and a Parser a, used in the parsing code. Finally, the type classes that we expect, Applicative (and thus Functor), are added as constraints on our type class:
class Applicative f => Descr f where
    char :: Char -> f ()
    many :: f a -> f [a]
    orElse :: f a -> f a -> f a
    primitive :: String -> Parser a -> f a
    nonTerminal :: String -> f a -> f a
The instances are easily written:
instance Descr Parser where
    char = charP
    many = manyP
    orElse = orElseP
    primitive _ p = p
    nonTerminal _ p = p
instance Descr Grammar where
    char = charG
    many = manyG
    orElse = orElseG
    primitive s _ = primitiveG s
    nonTerminal s g = nonTerminalG s g
And we can now take the derived definitions, of which so far we had two copies, and define them once and for all:
many1 :: Descr f => f a -> f [a]
many1 p = pure (:) <*> p <*> many p
anyChar :: Descr f => f Char
anyChar = primitive "char" anyCharP
dottedWords :: Descr f => f [String]
dottedWords = many1 (many anyChar <* char '.')
sepBy :: Descr f => f a -> f () -> f [a]
sepBy p1 p2 = ((:) <$> p1 <*> (many (p2 *> p1))) `orElse` pure []
newline :: Descr f => f ()
newline = primitive "newline" (charP '\n')
And thus we now have our CSV parser/grammar generator:
parseCSV :: Descr f => f [[String]]
parseCSV = many parseLine
  where
    parseLine = nonTerminal "line" $
        parseCell `sepBy` char ',' <* newline
    parseCell = nonTerminal "cell" $
        char '"' *> many (primitive "not-quote" (anyCharButP '"')) <* char '"'
We can now use this definition both to parse and to generate grammars:
*Main> putStr $ ppGrammar "csv" (parseCSV)
cell = '"',  not-quote , '"';
line = (cell,  ',', cell    ''), newline;
csv =  line ;
*Main> parse parseCSV "\"ab\",\"cd\"\n\"\",\"de\"\n\n"
Just [["ab","cd"],["","de"],[]]

The INI file parser and grammar As a final exercise, let us transform the INI file parser into a combined thing. Here is the parser (another artifact of last week's homework), again using applicative style2:
parseINIP :: Parser INIFile
parseINIP = many1P parseSection
  where
    parseSection =
        (,) <$  charP '['
            <*> parseIdent
            <*  charP ']'
            <*  charP '\n'
            <*> (catMaybes <$> manyP parseLine)
    parseIdent = many1P letterOrDigitP
    parseLine = parseDecl `orElseP` parseComment `orElseP` parseEmpty
    parseDecl = Just <$> (
        (,) <$> parseIdent
            <*  manyP (charP ' ')
            <*  charP '='
            <*  manyP (charP ' ')
            <*> many1P (anyCharButP '\n')
            <*  charP '\n')
    parseComment =
        Nothing <$ charP '#'
                <* many1P (anyCharButP '\n')
                <* charP '\n'
    parseEmpty = Nothing <$ charP '\n'
Transforming that to a generic description is quite straightforward. We use primitive again to wrap letterOrDigitP:
descrINI :: Descr f => f INIFile
descrINI = many1 parseSection
  where
    parseSection =
        (,) <$  char '['
            <*> parseIdent
            <*  char ']'
            <*  newline
            <*> (catMaybes <$> many parseLine)
    parseIdent = many1 (primitive "alphanum" letterOrDigitP)
    parseLine = parseDecl `orElse` parseComment `orElse` parseEmpty
    parseDecl = Just <$> (
        (,) <$> parseIdent
            <*  many (char ' ')
            <*  char '='
            <*  many (char ' ')
            <*> many1 (primitive "non-newline" (anyCharButP '\n'))
	    <*  newline)
    parseComment =
        Nothing <$ char '#'
                <* many1 (primitive "non-newline" (anyCharButP '\n'))
		<* newline
    parseEmpty = Nothing <$ newline
This yields this not very helpful grammar (abbreviated here):
*Main> putStr $ ppGrammar "ini" descrINI
ini = '[', alphanum, {alphanum}, ']', newline, {alphanum, {alphanum}, {' '}, …
But with a few uses of nonTerminal, we get something really nice:
descrINI :: Descr f => f INIFile
descrINI = many1 parseSection
  where
    parseSection = nonTerminal "section" $
        (,) <$  char '['
            <*> parseIdent
            <*  char ']'
            <*  newline
            <*> (catMaybes <$> many parseLine)
    parseIdent = nonTerminal "identifier" $
        many1 (primitive "alphanum" letterOrDigitP)
    parseLine = nonTerminal "line" $
        parseDecl `orElse` parseComment `orElse` parseEmpty
    parseDecl = nonTerminal "declaration" $ Just <$> (
        (,) <$> parseIdent
            <*  spaces
            <*  char '='
            <*  spaces
            <*> remainder)
    parseComment = nonTerminal "comment" $
        Nothing <$ char '#' <* remainder
    remainder = nonTerminal "line-remainder" $
        many1 (primitive "non-newline" (anyCharButP '\n')) <* newline
    parseEmpty = Nothing <$ newline
    spaces = nonTerminal "spaces" $ many (char ' ')
*Main> putStr $ ppGrammar "ini" descrINI
identifier = alphanum, {alphanum};
spaces = {' '};
line-remainder = non-newline, {non-newline}, newline;
declaration = identifier, spaces, '=', spaces, line-remainder;
comment = '#', line-remainder;
line = declaration | comment | newline;
section = '[', identifier, ']', newline, {line};
ini = section, {section};

Recursion (variant 1) What if we want to write a parser/grammar-generator that is able to generate the following grammar, which describes terms that are additions and multiplications of natural numbers:
const = digit, {digit};
spaces = {' ' | newline};
atom = const | '(', spaces, expr, spaces, ')', spaces;
mult = atom, {spaces, '*', spaces, atom}, spaces;
plus = mult, {spaces, '+', spaces, mult}, spaces;
expr = plus;
The production of expr is recursive (via plus, mult, atom). We have seen above that simply defining a Grammar a recursively does not go well. One solution is to add a new combinator for explicit recursion, which replaces nonTerminal as a type class method:
class Applicative f => Descr f where
    …
    recNonTerminal :: String -> (f a -> f a) -> f a
instance Descr Parser where
    …
    recNonTerminal _ p = let r = p r in r
instance Descr Grammar where
    …
    recNonTerminal = recNonTerminalG
recNonTerminalG :: String -> (Grammar a -> Grammar a) -> Grammar a
recNonTerminalG name f =
    let G (prods, rhs) = f (G ([], NonTerminal name))
    in G (prods ++ [(name, rhs)], NonTerminal name)
nonTerminal :: Descr f => String -> f a -> f a
nonTerminal name p = recNonTerminal name (const p)
runGrammer :: String -> Grammar a -> BNF
runGrammer main (G (prods, NonTerminal nt)) | main == nt = prods
runGrammer main (G (prods, rhs)) = prods ++ [(main, rhs)]
The change in runGrammer avoids adding a pointless expr = expr production to the output. This lets us define a parser/grammar-generator for the arithmetic expressions given above:
data Expr = Plus Expr Expr | Mult Expr Expr | Const Integer
    deriving Show
mkPlus :: Expr -> [Expr] -> Expr
mkPlus = foldl Plus
mkMult :: Expr -> [Expr] -> Expr
mkMult = foldl Mult
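To make the folding concrete: both helpers nest to the left, as a quick GHCi check shows (my own illustration, not from the original post):
*Main> mkPlus (Const 1) [Const 2, Const 3]
Plus (Plus (Const 1) (Const 2)) (Const 3)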
parseExpr :: Descr f => f Expr
parseExpr = recNonTerminal "expr" $ \ exp ->
    ePlus exp
ePlus :: Descr f => f Expr -> f Expr
ePlus exp = nonTerminal "plus" $
    mkPlus <$> eMult exp
           <*> many (spaces *> char '+' *> spaces *> eMult exp)
           <*  spaces
eMult :: Descr f => f Expr -> f Expr
eMult exp = nonTerminal "mult" $
    mkMult <$> eAtom exp
           <*> many (spaces *> char '*' *> spaces *> eAtom exp)
           <*  spaces
eAtom :: Descr f => f Expr -> f Expr
eAtom exp = nonTerminal "atom" $
    aConst `orElse` eParens exp
aConst :: Descr f => f Expr
aConst = nonTerminal "const" $ Const . read <$> many1 digit
eParens :: Descr f => f a -> f a
eParens inner =
    id <$  char '('
       <*  spaces
       <*> inner
       <*  spaces
       <*  char ')'
       <*  spaces
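The post does not show the top-level digit and spaces combinators that parseExpr relies on; judging from the grammar output below, they could be reconstructed along these lines (my sketch, not the author's code):
digitP :: Parser Char
digitP = do
    c <- anyCharP
    if isDigit c then return c else noParserP
digit :: Descr f => f Char
digit = primitive "digit" digitP
spaces :: Descr f => f ()
spaces = nonTerminal "spaces" $ () <$ many (char ' ' `orElse` newline)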
And indeed, this works:
*Main> putStr $ ppGrammar "expr" parseExpr
const = digit, {digit};
spaces = {' ' | newline};
atom = const | '(', spaces, expr, spaces, ')', spaces;
mult = atom, {spaces, '*', spaces, atom}, spaces;
plus = mult, {spaces, '+', spaces, mult}, spaces;
expr = plus;

Recursion (variant 2) Interestingly, there is another solution to this problem, which avoids introducing recNonTerminal and explicitly passing around the recursive call (i.e. the exp in the example). To implement that we have to adjust our Grammar type as follows:
newtype Grammar a = G ([String] -> (BNF, RHS))
The idea is that the list of strings is those non-terminals that we are currently defining. So in nonTerminal, we check if the non-terminal to be introduced is currently in the process of being defined, and then simply ignore the body. This way, the recursion is stopped automatically:
nonTerminalG :: String -> (Grammar a) -> Grammar a
nonTerminalG name (G g) = G $ \seen ->
    if name `elem` seen
    then ([], NonTerminal name)
    else let (prods, rhs) = g (name : seen)
         in (prods ++ [(name, rhs)], NonTerminal name)
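The other primitives need the same treatment, threading the list of non-terminals currently being defined through; a sketch of how they might look after the type change (my reconstruction, assuming the new Grammar type above):
charG :: Char -> Grammar ()
charG c = G $ \_ -> ([], Terminal [c])
primitiveG :: String -> Grammar a
primitiveG s = G $ \_ -> ([], NonTerminal s)
manyG :: Grammar a -> Grammar [a]
manyG (G g) = G $ \seen ->
    let (prods, rhs) = g seen in (prods, Repetition rhs)
orElseG :: Grammar a -> Grammar a -> Grammar a
orElseG (G g1) (G g2) = G $ \seen ->
    let (prods1, rhs1) = g1 seen
        (prods2, rhs2) = g2 seen
    in (mergeProds prods1 prods2, Choice rhs1 rhs2)
-- the Functor and Applicative instances pass the seen list along in the same way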
After adjusting the other primitives of Grammar (including the Functor and Applicative instances, which now have to pass the list of currently defined non-terminals along) so that everything type-checks again, we observe that this parser/grammar generator for expressions, with genuine recursion, works now:
parseExp :: Descr f => f Expr
parseExp = nonTerminal "expr" $
    ePlus
ePlus :: Descr f => f Expr
ePlus = nonTerminal "plus" $
    mkPlus <$> eMult
           <*> many (spaces *> char '+' *> spaces *> eMult)
           <*  spaces
eMult :: Descr f => f Expr
eMult = nonTerminal "mult" $
    mkMult <$> eAtom
           <*> many (spaces *> char '*' *> spaces *> eAtom)
           <*  spaces
eAtom :: Descr f => f Expr
eAtom = nonTerminal "atom" $
    aConst `orElse` eParens parseExp
Note that the recursion is only going to work if there is at least one call to nonTerminal somewhere around the recursive calls. We still cannot implement many as naively as above.

Homework If you want to play more with this: The homework is to define a parser/grammar-generator for EBNF itself, as specified in this variant:
identifier = letter, {letter | digit | '-'};
spaces = {' ' | newline};
quoted-char = non-quote-or-backslash | '\\', '\\' | '\\', '\'';
terminal = '\'', {quoted-char}, '\'', spaces;
non-terminal = identifier, spaces;
option = '[', spaces, rhs, spaces, ']', spaces;
repetition = '{', spaces, rhs, spaces, '}', spaces;
group = '(', spaces, rhs, spaces, ')', spaces;
atom = terminal | non-terminal | option | repetition | group;
sequence = atom, {spaces, ',', spaces, atom}, spaces;
choice = sequence, {spaces, '|', spaces, sequence}, spaces;
rhs = choice;
production = identifier, spaces, '=', spaces, rhs, ';', spaces;
bnf = production, {production};
This grammar is set up so that the precedence of , and | is correctly implemented: a, b | c will parse as (a, b) | c. In this syntax for BNF, terminal characters are quoted, i.e. inside '…', a ' is replaced by \' and a \ is replaced by \\; this is done by the function quote in ppRHS. If you do this, you should be able to round-trip with the pretty-printer, i.e. parse back what it wrote:
*Main> let bnf1 = runGrammer "expr" parseExpr
*Main> let bnf2 = runGrammer "expr" parseBNF
*Main> let f = Data.Maybe.fromJust . parse parseBNF . ppBNF
*Main> f bnf1 == bnf1
True
*Main> f bnf2 == bnf2
True
The last line is quite meta: We are using parseBNF as a parser on the pretty-printed grammar produced from interpreting parseBNF as a grammar.
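If you want a nudge to get started on the homework, here is a hedged sketch of just the identifier production against the Descr interface (my own sketch with hypothetical helpers letterP and identCharP; not the intended full solution):
letterP, identCharP :: Parser Char
letterP = do
    c <- anyCharP
    if isAlpha c then return c else noParserP
identCharP = do
    c <- anyCharP
    if isAlphaNum c || c == '-' then return c else noParserP
identifier :: Descr f => f String
identifier = nonTerminal "identifier" $
    (:) <$> primitive "letter" letterP
        <*> many (primitive "letter | digit | '-'" identCharP)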

Conclusion We have again seen an example of the excellent support for abstraction in Haskell: Being able to define so very different things such as a parser and a grammar description with the same code is great. Type classes helped us here. Note that it was crucial that our combined parser/grammars are only able to use the methods of Applicative, and not Monad. Applicative is less powerful, so by giving less power to the user of our Descr interface, the other side, i.e. the implementation, can be more powerful. The reason why Applicative is ok, but Monad is not, is that in Applicative, the results do not affect the shape of the computation, whereas in Monad, the whole point of the bind operator (>>=) is that the result of the computation is used to decide the next computation. And while this is perfectly fine for a parser, it just makes no sense for a grammar generator, where there simply are no values around! We have also seen that a phantom type, namely the parameter of Grammar, can be useful, as it lets the type system make sure we do not write nonsense. For example, the type of orElseG ensures that both grammars that are combined here indeed describe something of the same type.

  1. It seems to be the week of applicative-appraising blog posts: Brent has posted a nice piece about enumerations using Applicative yesterday.
  2. I like how in this alignment of <*> and <* the > point out where the arguments are that are being passed to the function on the left.

22 October 2016

Ingo Juergensmann: Automatically update TLSA records on new Letsencrypt Certs

I've been using DNSSEC for quite some time now and it is working quite well. When LetsEncrypt went public beta, I jumped on the train and migrated many services to LE-based TLS. However, there was still one small problem with LE certs: when there is a new cert, all of the old TLSA resource records are not valid anymore and might give problems to strict DNSSEC-checking clients. It took a while until my pain was big enough to finally fix it with some scripts. There are at least two scripts involved: 1) dnssec.sh
This script does all of my DNSSEC handling. You can just do a "dnssec.sh enable-dnssec domain.tld" and everything is configured so that you only need to copy the appropriate keys into the webinterface of your DNS registry.
host:~/bin# dnssec.sh
No parameter given.
Usage: dnssec.sh MODE DOMAIN
MODE can be one of the following:
enable-dnssec : perform all steps to enable DNSSEC for your domain
edit-zone     : safely edit your zone after enabling DNSSEC
create-dnskey : create new dnskey only
load-dnskey   : loads new dnskeys and signs the zone with them
show-ds       : shows DS records of zone
zoneadd-ds    : adds DS records to the zone file
show-dnskey   : extract DNSKEY record that needs to uploaded to your registrar
update-tlsa   : update TLSA records with new TLSA hash, needs old and new TLSA hashes as additional parameters
For updating zone files just do a "dnssec.sh edit-zone domain.tld" to add new records and such, and the script will take care, e.g., of increasing the serial of the zone file. I find this very convenient, so I often use this script for non-DNSSEC-enabled domains as well. However you can spot the command line option "update-tlsa". This option needs the old and the new TLSA hashes beside the domain.tld parameter. However, this option is used from the second script: 2) check_tlsa.sh
This is a quite simple Bash script that parses the domains.txt from the letsencrypt.sh script, looks up the old TLSA hash in the zone files (structured in TLD/domain.tld directories), compares the old with the new hash (by invoking tlsagen.sh) and, if there is a difference in hashes, calls dnssec.sh with the proper parameters:
#!/bin/bash
set -e
LEPATH="/etc/letsencrypt.sh"
for i in $(cat /etc/letsencrypt.sh/domains.txt | awk '{print $1}') ; do
  domain=$(echo $i | awk 'BEGIN {FS="."}; {print $(NF-1)"."$NF}')
  #echo -n "Domain: $domain"
  TLD=$(echo $i | awk 'BEGIN {FS="."}; {print $NF}')
  #echo ", TLD: $TLD"
  OLDTLSA=$(grep -i "in.*tlsa" /etc/bind/${TLD}/${domain} | grep ${i} | head -n 1 | awk '{print $NF}')
  if [ -n "${OLDTLSA}" ] ; then
    #echo "--> ${OLDTLSA}"
    # Usage: tlsagen.sh cert.pem host[:port] usage selector mtype
    NEWTLSA=$(/path/to/tlsagen.sh $LEPATH/certs/${i}/fullchain.pem ${i} 3 1 1 | awk '{print $NF}')
    #echo "==> $NEWTLSA"
    if [ "${OLDTLSA}" != "${NEWTLSA}" ] ; then
      /path/to/dnssec.sh update-tlsa ${domain} ${OLDTLSA} ${NEWTLSA} > /dev/null
      echo "TLSA RR update for ${i}"
    fi
  fi
done
So, quite simple and obviously a quick hack. For sure someone else can write a cleaner and more sophisticated implementation to do the same stuff, but at least it works for me™. Use it at your own risk and do whatever you want with these scripts (licensed under public domain). You can invoke check_tlsa.sh right after your crontab call for letsencrypt.sh. In a more sophisticated way it should be fairly easy to invoke these scripts from letsencrypt.sh post hooks as well.
Please find the files attached to this page (remove the .txt extension after saving, of course).
Attachments: check_tlsa.sh.txt (812 bytes), dnssec.sh.txt (3.88 KB)

18 October 2016

MJ Ray: Rinse and repeat

Forgive me, reader, for I have sinned. It has been over a year since my last blog post. Life got busy. Paid work. Another round of challenges managing my chronic illness. Cycle campaigning. Fun bike rides. Friends. Family. Travels. Other social media to stroke. I'm still reading some of the planets where this blog post should appear and commenting on some, so I've not felt completely cut off, but I am surprised how many people don't allow comments on their blogs any more (or make it too difficult for me with reCaptcha and the like). The main motive for this post is to test some minor upgrades, though. Hi everyone. How's it going with you? I'll probably keep posting short updates in the future. Go in peace to love and serve the web.

16 October 2016

Thomas Goirand: Released OpenStack Newton, Moving OpenStack packages to upstream Gerrit CI/CD

OpenStack Newton is released, and uploaded to Sid OpenStack Newton was released on Thursday the 6th of October. I was able to upload nearly all of it before the week-end, though there were still a few hiccups, as I forgot to upload python-fixtures 3.0.0 to unstable, and only realized it thanks to some bug reports. As this is a build time dependency, it didn't disrupt Sid users too much, but 38 packages wouldn't build without it. Thanks to Santiago Vila for pointing at the issue here. As of writing, a lot of the Newton packages didn't migrate to Testing yet. It's been migrating in a very messy way. I'd love to improve this process, but I'm not sure how, if not filing RC bugs against 250 packages (which would be painful to do), so they would migrate at once. Suggestions welcome. Bye bye Jenkins For a few years, I was using Jenkins, together with a post-receive hook to build Debian Stable backports of OpenStack packages. Though nearly a year and a half ago, we had that project to build the packages within the OpenStack infrastructure, and use the CI/CD like OpenStack upstream was doing. This is done, and Jenkins is gone, as of OpenStack Newton. Current status As of August, almost all of the packages' Git repositories were uploaded to OpenStack Gerrit, and the build now happens in OpenStack infrastructure. We've been able to build all of the OpenStack Newton Debian packages using this system. This non-official jessie backports repository has also been validated using Tempest. Goodies from Gerrit and upstream CI/CD It is very nice to have it built this way, so we will be able to maintain a full CI/CD in upstream infrastructure using Newton for the life of Stretch, which means we will have the tools to test security patches virtually forever. Another thing is that now, anyone can propose packaging patches without the need for an Alioth account, by sending a patch for review through Gerrit. It is our hope that this will increase the likeliness of external contribution, for example from 3rd party plugin vendors (ie: networking driver vendors, for example), or upstream contributors themselves. They are already used to Gerrit, and they all expected the packaging to work this way. They are all very much welcome. The upstream infra: nodepool, zuul and friends
The OpenStack infrastructure has been described already in planet.debian.org, by Ian Wienand. So I won't describe it again, he did a better job than I ever would. How it works All source packages are stored in Gerrit with the deb- prefix. This is in order to avoid conflict with upstream code, and to easily locate packaging repositories. For example, you'll find Nova packaging under https://git.openstack.org/cgit/openstack/deb-nova. Two Debian repositories are stored in the infrastructure AFS (Andrew File System, which means a copy of that repository exists on each cloud where we have compute resources): one for the actual deb-* builds, under "jessie-newton", and one for the automatic backports, maintained in the deb-auto-backports gerrit repository. We're using a git tag based workflow. Every Gerrit repository contains all of the upstream branches, plus a debian/newton branch, which contains the same content as a tag of upstream, plus the debian folder. The orig tarball is generated using "git archive", then used by sbuild to produce binaries. To package a new upstream release, one simply needs to "git merge -X theirs FOO" (where FOO is the tag you want to merge), then edit debian/changelog so that the Debian package version matches the tag, then do "git commit -a --amend", and simply "git review". At this point, the OpenStack CI will build the package. If it builds correctly, then a core reviewer can approve the merge commit, the patch is merged, then the package is built and the binary package published on the OpenStack Debian package repository. Maintaining backports automatically The automatic backports are maintained through a Gerrit repository called deb-auto-backports containing a packages-list file that simply lists source packages we need to backport. On each new CR (change request) in Gerrit, thanks to some madison-lite and "dpkg --compare-versions" magic, the packages-list is used to compare what's in the Debian archive and what we have in the jessie-newton-backports repository. If the version is lower in our repository, or if the package doesn't exist, then a build is triggered. There is the possibility to backport from any Debian release (using the -d flag in the packages-list file), and we can even use jessie-backports to just rebuild the package. I also had to write a hack to just download from jessie-backports without rebuilding, because rebuilding the webkit2gtk package (needed by sphinx) was taking too many resources (though we'll try to never use it, and rebuild packages when possible). The nice thing with this system is that we don't need to care much about keeping packages up-to-date: the script does that for us. Upstream Debian repositories are NOT for production The produced package repositories are there because we have interconnected build dependencies, needed to run unit tests at build time. It is the only reason why such Debian repositories exist. They are not for production use. If you wish to deploy OpenStack, we very much recommend using packages from distributions (like Debian or Ubuntu). Indeed, the infrastructure Debian repositories are updated multiple times daily. As a result, it is very likely that you will experience failures to download (hash or file size mismatch and such). Also, the functional tests aren't yet wired into the CI/CD in OpenStack infra, and therefore we cannot guarantee yet that the packages are usable. Improving the build infrastructure There's a bunch of things which we could do to improve the build process. Let me give a list of things we want to do.
Generalizing to Debian During Debconf 16, I had very interesting talks with the DSA (Debian System Administrators) about deploying such a CI/CD for the whole of the Debian archive, interfacing Gerrit with something like dgit and a build CI. I was told that I should provide a proof of concept first, which I very much agreed with. Such a PoC is there now, within OpenStack infra. I very much welcome any Debian contributor to try it, through a packaging patch. If you wish to do so, you should read how to contribute to OpenStack here: https://wiki.openstack.org/wiki/How_To_Contribute#If_you.27re_a_developer and then simply send your patch with "git review". This system, however, currently only fits the git tag based packaging workflow. We'd have to do a little bit more work to make it possible to use pristine-tar (basically, allow pushing to the upstream and pristine-tar branches without any CI job connected to the push). Dear DSA team, as we now have a nice PoC that is working well, on which the OpenStack PKG team is maintaining 100s of packages, shall we try to generalize and provide such an infrastructure for every packaging team and DD?

10 October 2016

Daniel Pocock: DVD-based Clean Room for PGP and PKI

There is increasing interest in computer security these days and more and more people are using some form of PKI, whether it is signing Git tags, signing packages for a GNU/Linux distribution or just signing your emails. There are also more home networks and small offices who require their own in-house Certificate Authority (CA) to issue TLS certificates for VPN users (e.g. StrongSWAN) or IP telephony. Back in April, I started discussing the PGP Clean Room idea (debian-devel discussion and gnupg-users discussion), created a wiki page and started development of a script to build the clean room ISO using live-build on Debian. Keeping the master keys completely offline and putting subkeys onto smart cards and other devices dramatically lowers the risk of mistakes and security breaches. Using a read-only DVD to operate the clean-room makes it convenient and harder to tamper with. Trying it out in VirtualBox It is fairly easy to clone the Git repository, run the script to create the ISO and boot it in VirtualBox to see what is inside: At the moment, it contains a number of packages likely to be useful in a PKI clean room, including GnuPG, smartcard drivers, the lightweight pki utility from StrongSWAN and OpenSSL. I've been trying it out with an SPR-532, one of the GnuPG-supported smartcard readers with a pin-pad and the OpenPGP card. Ready to use today More confident users will be able to build the ISO and use it immediately by operating all the utilities from the command line. For example, you should be able to fully configure PGP smart cards by following this blog from Simon Josefsson. The ISO includes some useful scripts, for example, create-raid will quickly partition and RAID a set of SD cards to store your master key-pair offline. Getting involved To make PGP accessible to a wider user-base and more convenient for those who don't use GnuPG frequently enough to remember all the command line options, it would be interesting to create a GUI, possibly using python-newt to create a similar look-and-feel to popular text-based installer and system administration tools. If you are keen on this project and would like to discuss it further, please come and join the new pki-clean-room mailing list and feel free to ask questions or share your thoughts about it. One way to proceed may be to recruit an Outreachy or GSoC intern to develop the UI. Before they can get started, it would be necessary to more thoroughly document workflow requirements.

8 October 2016

Norbert Preining: Debian/TeX update October 2016: all of TeX Live and Biber 2.6

Finally a new update of many TeX related packages: all the texlive-* packages (including the binary packages) and biber have been updated to the latest release. This upload was delayed by my travels around the world, as well as the necessity to package a new Perl module (libdatetime-calendar-julian-perl) as required by the new Biber. Also, my new job leaves me only the weekends for packaging. Anyway, the packages are now uploaded and should appear soon on your friendly local server. texlive2016-debian There are several highlights: The binaries have been patched with several upstream fixes (tex4ht and XeTeX compatibility, as well as various Japanese TeX engine fixes), updated biber and biblatex, and as usual loads of new and updated packages. Last but not least I want to thank one particular author: His package was removed from TeX Live due to the addition of a rather unusual clause in the license. Instead of simply uploading new packages to Debian with the rather important package removed, I contacted the author and asked for clarification. And to my great pleasure, he immediately answered with an update of the package with a fixed license. All of us users of these many packages should be grateful to the authors of the packages, who invest loads of their free time into supporting our community. Thanks! Enough now, here as usual is the list of new and updated packages with links to their respective CTAN pages. Enjoy. New packages addfont, apalike-german, autoaligne, baekmuk, beamerswitch, beamertheme-cuerna, beuron, biblatex-claves, biolett-bst, cooking-units, cstypo, emf, eulerpx, filecontentsdef, frederika2016, grant, latexgit, listofitems, overlays, phonenumbers, pst-arrow, quicktype, revquantum, richtext, semantic-markup, spalign, texproposal, tikz-page, unfonts-core, unfonts-extra, uspace. Updated packages achemso, acmart, acro, adobemapping, alegreya, allrunes, animate, arabluatex, archaeologie, asymptote, attachfile, babel-greek, bangorcsthesis, beebe, biblatex, biblatex-anonymous, biblatex-apa, biblatex-bookinother, biblatex-chem, biblatex-fiwi, biblatex-gost, biblatex-ieee, biblatex-manuscripts-philology, biblatex-morenames, biblatex-nature, biblatex-opcit-booktitle, biblatex-phys, biblatex-realauthor, biblatex-science, biblatex-true-citepages-omit, bibleref, bidi, chemformula, circuitikz, cochineal, colorspace, comment, covington, cquthesis, ctex, drawmatrix, ejpecp, erewhon, etoc, exsheets, fancyhdr, fei, fithesis, footnotehyper, fvextra, geschichtsfrkl, gnuplottex, gost, gregoriotex, hausarbeit-jura, ijsra, ipaex, jfontmaps, jsclasses, jslectureplanner, latexdiff, leadsheets, libertinust1math, luatexja, markdown, mcf2graph, minutes, multirow, mynsfc, nameauth, newpx, newtxsf, notespages, optidef, pas-cours, platex, prftree, pst-bezier, pst-circ, pst-eucl, pst-optic, pstricks, pstricks-add, refenums, reledmac, rsc, shdoc, siunitx, stackengine, tabstackengine, tagpair, tetex, texlive-es, texlive-scripts, ticket, translation-biblatex-de, tudscr, turabian-formatting, updmap-map, uplatex, xebaposter, xecjk, xepersian, xpinyin. Enjoy.

26 September 2016

Kees Cook: security things in Linux v4.3

When I gave my State of the Kernel Self-Protection Project presentation at the 2016 Linux Security Summit, I included some slides covering some quick bullet points on things I found of interest in recent Linux kernel releases. Since there wasn't a lot of time to talk about them all, I figured I'd make some short blog posts here about the stuff I was paying attention to, along with links to more information. This certainly isn't everything security-related or generally of interest, but they're the things I thought needed to be pointed out. If there's something security-related you think I should cover from v4.3, please mention it in the comments. I'm sure I haven't caught everything. :)

A note on timing and context: the momentum for starting the Kernel Self Protection Project got rolling well before it was officially announced on November 5th last year. To that end, I included stuff from v4.3 (which was developed in the months leading up to November) under the umbrella of the project, since the goals of KSPP aren't unique to the project, nor must the goals be met by people that are explicitly participating in it. Additionally, not everything I think worth mentioning here technically falls under the kernel self-protection ideal anyway; some things are just really interesting userspace-facing features. So, to that end, here are things I found interesting in v4.3:

CONFIG_CPU_SW_DOMAIN_PAN

Russell King implemented this feature for ARM which provides emulated segregation of user-space memory when running in kernel mode, by using the ARM Domain access control feature. This is similar to a combination of Privileged eXecute Never (PXN, in later ARMv7 CPUs) and Privileged Access Never (PAN, coming in future ARMv8.1 CPUs): the kernel cannot execute user-space memory, and cannot read/write user-space memory unless it was explicitly prepared to do so. This stops a huge set of common kernel exploitation methods, where either a malicious executable payload has been built in user-space memory and the kernel was redirected to run it, or where malicious data structures have been built in user-space memory and the kernel was tricked into dereferencing the memory, ultimately leading to a redirection of execution flow.

This raises the bar for attackers, since they can no longer trivially build code or structures in user-space where they control the memory layout, locations, etc. Instead, an attacker must find areas in kernel memory that are writable (and in the case of code, executable), where they can discover the location as well. For an attacker, there are vastly fewer places where this is possible in kernel memory as opposed to user-space memory. And as we continue to reduce the attack surface of the kernel, these opportunities will continue to shrink.

While hardware support for this kind of segregation exists in s390 (natively separate memory spaces), ARM (PXN and PAN as mentioned above), and very recent x86 (SMEP since Ivy-Bridge, SMAP since Skylake), ARM is the first upstream architecture to provide this emulation for existing hardware. Everyone running ARMv7 CPUs with this kernel feature enabled suddenly gains the protection. Similar emulation protections (PAX_MEMORY_UDEREF) have been available in PaX/Grsecurity for a while, and I'm delighted to see a form of this land in upstream finally.

To test this kernel protection, the ACCESS_USERSPACE and EXEC_USERSPACE triggers for lkdtm have existed since Linux v3.13, when they were introduced in anticipation of the x86 SMEP and SMAP features.
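Exercising those lkdtm triggers is straightforward on a kernel built with CONFIG_LKDTM; a minimal sketch, assuming debugfs is mounted in the usual place and you are root (on a protected kernel each write is expected to fault, so run them one at a time on a disposable machine):

# mount debugfs first if it is not already mounted
mount -t debugfs none /sys/kernel/debug
# each write provokes the named user-space access from kernel mode;
# with SW domain PAN enabled this should Oops rather than succeed
echo ACCESS_USERSPACE > /sys/kernel/debug/provoke-crash/DIRECT
echo EXEC_USERSPACE > /sys/kernel/debug/provoke-crash/DIRECT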
Ambient Capabilities

Andy Lutomirski (with Christoph Lameter and Serge Hallyn) implemented a way for processes to pass capabilities across exec() in a sensible manner. Until Ambient Capabilities, any capabilities available to a process would only be passed to a child process if the new executable was correctly marked with filesystem capability bits. This turns out to be a real headache for anyone trying to build an even marginally complex least-privilege execution environment. The case that Chrome OS ran into was having a network service daemon responsible for calling out to helper tools that would perform various networking operations. Keeping the daemon not running as root and retaining the needed capabilities in children required conflicting or crazy filesystem capabilities organized across all the binaries in the expected tree of privileged processes. (For example, you may need to set filesystem capabilities on bash!) By being able to explicitly pass capabilities at runtime (instead of based on filesystem markings), this becomes much easier. For more details, the commit message is well written, almost twice as long as the code changes, and contains a test case. If that isn't enough, there is a self-test available in tools/testing/selftests/capabilities/ too.

PowerPC and Tile support for seccomp filter

Michael Ellerman added support for seccomp to PowerPC, and Chris Metcalf added support to Tile. As the seccomp maintainer, I get excited when an architecture adds support, so here we are with two. Also included were updates to the seccomp self-tests (in tools/testing/selftests/seccomp), to help make sure everything continues working correctly.

That's it for v4.3. If I missed stuff you found interesting, please let me know! I'm going to try to get more per-version posts out in time to catch up to v4.8, which appears to be tentatively scheduled for release this coming weekend.
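Both sets of self-tests mentioned above can be run straight from a kernel source tree via the usual kselftest entry point; a minimal sketch, assuming a tree recent enough (v4.3 or later) to contain both test directories:

# from the top of a Linux source tree
make -C tools/testing/selftests TARGETS="capabilities seccomp" run_tests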

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

19 September 2016

Mike Gabriel: Rocrail changed License to some dodgy non-free non-License

The Background Story

A year ago, or so, I took some time to search the internet for Free Software that can be used for controlling model railways via a computer. I was happy to find Rocrail [1], one of only a few applications available on the market. And even more, I was very happy when I saw that it had been licensed under a Free Software license: GPL-3(+). A month ago, or so, I collected my old Märklin (Digital) stuff from my parents' place and started looking into it again after 15+ years, together with my little son. Some weeks ago, I remembered Rocrail and thought... hey, this software was GPLed code and absolutely suitable for uploading to Debian and/or Ubuntu. I searched for the Rocrail source code and figured out that it had been hidden from the web some time in 2015 and that the license had obviously been changed to some non-free license (I could not figure out what license, though). This made me very sad! I thought I had found a piece of software that might be interesting for testing with my model railway. Whenever I stumble over some nice piece of Free Software that I plan to use (or even only play with), I upload it to Debian as one of the first steps. However, I try hard to stay away from non-free software, so Rocrail became a non-option for me back in 2015. I should have moved on from here on... Instead...

Proactively, I signed up with the Rocrail forum and asked the author(s) if they see any chance of re-licensing the Rocrail code under GPL (or any other FLOSS license) again [2]. When I encounter situations like this, I normally offer my expertise and help with such licensing stuff for free. My impression by this point already was that something strange must have happened in the past, such that the software developers chose GPL, later stepped back from that decision, and from then on have been hiding the source code from the web entirely.

Going deeper...

The Rocrail project's wiki states that anyone can request GitBlit access via the forum and obtain the source code via Git for local build purposes only. Nice! So, I asked for access to the project's Git repository, which I was granted. Thanks for that.

Trivial Source Code Investigation...

So far so good. I investigated the source code (well, only the license meta stuff shipped with the source code...) and found that the main COPYING files (found at various locations in the source tree, containing a full version of the GPL-3 license) had been replaced by this text:
Copyright (c) 2002 Robert Jan Versluis, Rocrail.net
All rights reserved.
Commercial usage needs permission.
The replacement happened with these Git commits:
commit cfee35f3ae5973e97a3d4b178f20eb69a916203e
Author: Rob Versluis <r.j.versluis@rocrail.net>
Date:   Fri Jul 17 16:09:45 2015 +0200
    update copyrights
commit df399d9d4be05799d4ae27984746c8b600adb20b
Author: Rob Versluis <r.j.versluis@rocrail.net>
Date:   Wed Jul 8 14:49:12 2015 +0200
    update licence
commit 0daffa4b8d3dc13df95ef47e0bdd52e1c2c58443
Author: Rob Versluis <r.j.versluis@rocrail.net>
Date:   Wed Jul 8 10:17:13 2015 +0200
    update
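Incidentally, anyone with a checkout can locate license changes like these themselves; a minimal sketch using plain Git, run inside the cloned repository:

# show every commit that touched the top-level COPYING file, with full diffs
git log -p --follow -- COPYING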
Getting in touch again, still being really interested and wanting to help...

As I consider such a non-license really dangerous when distributing any sort of software, be it Free or non-free Software, I posted the below text on the Rocrail forum:
Hi Rob,

I just stumbled over this post [3] [link reference adapted for this blog post], which probably is the one you have referred to above. It seems that Rocrail contains features that require a key or such for permanent activation. Basically, this is allowed and possible even with the GPL-3+ (although Free Software activists will not appreciate that). As the GPL states that people can share the source code, programmers can easily deactivate license key checks (and such) in the code and re-distribute that patchset as they like.

Furthermore, the current COPYING file is really not protective at all. It does not really protect you as copyright holder of the code. Meaning, if people crash their trains with your software, you could actually be legally prosecuted for that. In theory. Or in the U.S. ( ;-) ). The main reason for having a long, long license text is to protect you as the author in case your software causes trouble to other people. You do not have any warranty disclaimer in your COPYING file or elsewhere. Really not a good idea.

In that referenced post above, someone also writes about the nuisance of license discussions in this forum. I have seen various cases where people produced software and did not really care for licensing. Some ended with a letter from a lawyer, some with some BIG company using their code under their copyright holdership and their own commercial licensing scheme. This is not paranoia, this is what happens in the Free Software world from time to time.

A model that might be much more appropriate (and more protective to you as the author), maybe, is a dual release scheme for the code. A possible approach could be to split Rocrail into two editions: a Community Edition and a Professional/Commercial Edition. The Community Edition must be licensed in a way that allows re-using the code in a closed-source, non-free version of Rocrail (e.g. MIT/Expat License or Apache-2.0 License). Thus, the code base belonging to the Community Edition would be licensed, say..., as Apache-2.0, and for the extra features in the Commercial Edition, you may use any non-free license you want (but please not that COPYING file you have now, it really does not protect your copyright holdership).

The reason for releasing (a reduced set of features of a) software as Free Software is to extend the user base. The honey jar effect, as practised by many huge FLOSS projects (e.g. Owncloud, GitLab, etc.). If people could install Rocrail from the Debian / Ubuntu archives directly, I am sure that the user base of Rocrail would increase. There may also be developers popping up showing an interest in Rocrail (e.g. like me). However, I know many FLOSS developers (e.g. like me) that won't waste their free time on working for a non-free piece of software (without being paid).

If you follow (or want to follow) a business model with Rocrail, then keep some interesting features in the Commercial Edition and don't ship that source code. People with deep interest may opt for that. Furthermore, another option could be dual licensing the code. As the copyright holder of Rocrail you are free to juggle with licenses and apply any license to a release you want. For example, this can be interesting for a free-again Rocrail being shipped via Apple's iStore.

Last but not least, as you ship the complete source code with all previous changes as a Git project to those who request GitBlit access, it is possible to obtain all earlier versions of Rocrail. In the mail I received with my GitBlit credentials, there was some text that prohibits publishing the code. Fine. But: (in theory) it is not forbidden to share the code with a friend, for local usage. This friend finds the COPYING file, frowns and rewinds back to 2015 where the license was still GPL-3+. GPL-3+ code can be shared with anyone and also published, so this friend could upload the 2015 version of Rocrail to Github or such and start to work on a free fork. You also may not want this.

Thanks for working on this piece of software! It is highly interesting, and I am still sad that it does not come with a free license anymore. I won't continue this discussion and move on, unless you are interested in any of the above information and ask for more expertise. Ping me here or directly via mail, if needed. If the expertise leads to parts of Rocrail becoming Free Software again, the expertise is offered free of charge ;-).

light+love
Mike
Wow, the first time I got moderated somewhere... What an experience!

This experience was really new. My post got immediately removed from the forum by the main author of Rocrail (with the forum moderator's hat on). The new experience was: I got really angry when I discovered that I had been moderated. Wow! Really a powerful emotion. No harassment in my words, no secrets disclosed, and still... my free speech got suppressed by someone. That feels intense! And it only occurred in the virtual realm, not face to face. Wow!!! I did not expect such intensity... The reason for wiping my post without any other communication was given as below, and it is quite a statement to frown upon (this post has also been "moderately" removed from the forum thread [2] a bit later today):
Mike,
I think its not a good idea to point out a way to get the sources back to the GPL periode.
Therefore I deleted your posting.
(The phpBB forum software also allows moderators to edit posts, so the critical passage could have been removed instead, but immediately wiping the full message, well...) Also, just wiping my post to suppress my words, without any other reply or apology, really is a no-go. And as for the reason for wiping the rest of the text... any Git user can easily figure out how to get a FLOSS version of Rocrail and continue to work on that from then on. Really.

Now the political part of this blog post...

Fortunately, I still live in an area of the world where the right of free speech is still present. I found out: I really don't like being moderated!!! Especially if what I share / propose is really noooo secret at all. Anyone who knows how to use Git can come to the same conclusion as I have come to this morning. [Off-topic, not at all related to Rocrail: the last votes here in Germany indicate that some really stupid folks here yearn for another, this time highly idiotic, wind of change, where free speech may end up as a precious good.]

To other (Debian) Package Maintainers and Railroad Enthusiasts...

With this blog post I probably close the last option for Rocrail going FLOSS again. Personally, I think that gate was already closed before I got in touch.

Now really moving on...

Probably the best approach for my new train conductor hobby (as already recommended by the woman at my side some weeks back) is to leave the laptop lid closed when switching on the train control units. I should have listened to her much earlier. I have finally removed the Rocrail source code from my computer again, without building and testing the application. I have not shared the source code with anyone, nor the Git URL. I really think that FLOSS enthusiasts should stay away from this software for now. For my part, I have lost my interest in this completely...

References

light+love,
Mike

19 August 2016

Norbert Preining: Debian/TeX Live 2016.20160819-1

A new and unplanned release in quick succession. I have uploaded testing packages to experimental which incorporate tex4ht into the TeX Live packages, but somehow the tex4ht transitional update slipped into sid and made many packages uninstallable. Well, so after a bit more testing let's ship the beast to sid, meaning that tex4ht will finally be updated from the last 2009 version to the current status in TeX Live.

From the list of new packages I want to pick out the group of phf* packages, which, from a quick read of their documentation, seem very interesting. But most important is the incorporation of tex4ht into the TeX Live packages, so please report bugs and shortcomings to the BTS (a reportbug example follows below). Thanks.

New packages: aurl, bxjalipsum, cormorantgaramond, notespages, phffullpagefigure, phfnote, phfparen, phfqit, phfquotetext, phfsvnwatermark, phfthm, table-fct, tocdata.

Updated packages: acmart, acro, biblatex-abnt, biblatex-publist, bxdpx-beamer, bxjscls, bxnewfont, bxpdfver, dccpaper, etex-pkg, europasscv, exsheets, glossaries-extra, graphics-def, graphics-pln, guitarchordschemes, ijsra, kpathsea, latexpand, latex-veryshortguide, ledmac, libertinust1math, markdown, mcf2graph, menukeys, mfirstuc, mhchem, mweights, newpx, newtx, optidef, paralist, parnotes, pdflatexpicscale, pgfplots, philosophersimprint, pstricks-add, showexpl, tasks, tetex, tex4ht, texlive-docindex, udesoftec, xcolor-solarized.
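For anyone who hits problems, Debian's standard bug-reporting tool picks the right maintainer address automatically and walks you through the rest interactively; a minimal example:

reportbug tex4ht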

30 July 2016

Jose M. Calhariz: Enabling Wifi QCA9377 on an Asus E200HA

I bought a new laptop, an E200HA, because my previous one was a MacBook and it broke after a fall to the ground. I let it boot into Win10 first, to check that everything was OK and because I could not find the way to enter the UEFI/BIOS. (It is F2, and it is edge triggered.) It boots fast into Win10, but I got the feeling that it is a little slow. No worries, because I bought it for running Debian and for the autonomy of the battery: 14 hours playing music, according to Asus. A little research into whether the new laptop could run Linux returned almost no hits, but one very valuable link on how to set up the WiFi. So I got the feeling that I needed a Debian stretch CD for installation, and I downloaded the first installation DVD from here. I ran a trial of the DVD image using kvm:
kvm -m 2047 -cdrom debian-stretch-DI-alpha7-amd64-DVD-1.iso
I found that the installer DVD now has the functionality of a live CD. This will be useful. I copied the image to a USB stick using the dd command. I turned on the E200HA and entered the UEFI/BIOS by pressing and releasing the F2 key. I turned off secure boot and selected USB storage for boot. The E200HA happily booted Linux and I selected rescue mode. Using another USB stick, a 32GB one formatted as xfs because it has less overhead for storing inodes than ext3/4, I put a raw image of the internal storage of the E200HA on it, preserving the Win10 install that way (a dd sketch appears at the end of this post). Another reboot, this time for the installation of Debian stretch. It detected the missing firmware files for the WiFi adaptor. This link came in very handy. The instructions there are for an older Linux kernel, so I recommend doing something similar to the following commands:
git clone https://github.com/ajaybhatia/Qualcomm-Atheros-QCA9377-Wifi-Linux
cd Qualcomm-Atheros-QCA9377-Wifi-Linux/firmware-only
tar cvf QCA9377.tar QCA9377
Copy the tar file to a second USB stick and connect it to the other USB port. This tar does not contain the exact files the Debian installer is expecting, so you need to switch to the second console ("Alt-F2"), press Enter to activate a shell, and run the following commands:
cd /lib/firmware
mkdir ath10k
mount /dev/sdb1 /mnt
cd ath10k
tar xf /mnt/QCA9377.tar
Return to the first console ("Alt-F1") and continue with the installation. The list of missing firmware files is now reduced and the WiFi can work. I did have problems with the WiFi, but that was because a neighbouring router was on the same channel; since I changed the channel on my router the WiFi has been working like a charm. The following links may be useful in the future or as a reference: kvalo/ath10k-firmware, kernel/git/firmware/linux-firmware.git
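For completeness, here is roughly how the raw backup mentioned above can be taken, plus a quick check after installation that the adaptor found its firmware. This is a minimal sketch: the device names (/dev/mmcblk0 for the internal eMMC, /dev/sdb1 for the USB stick) are assumptions, so verify them with lsblk before running anything.

# back up the internal storage to the xfs-formatted USB stick
mount /dev/sdb1 /mnt
dd if=/dev/mmcblk0 of=/mnt/e200ha-win10.img bs=4M status=progress

# after installing, confirm that the ath10k driver loaded its firmware
dmesg | grep -i ath10k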

9 July 2016

Matthew Garrett: "I recieved a free or discounted product in return for an honest review"

My experiences with Amazon reviewing have been somewhat unusual. A review of a smart switch I wrote received enough attention that the vendor pulled the product from Amazon. At the time of writing, I'm ranked as around the 2750th best reviewer on Amazon despite having a total of 18 reviews. But the world of Amazon reviews is even stranger than that, and the past couple of weeks have given me some insight into it.

Amazon's success is fairly phenomenal. It's estimated that there's over 50 million people in the US paying $100 a year to get free shipping on Amazon purchases, and combined with Amazon's surprisingly customer friendly service there's a lot of people with a very strong preference for choosing Amazon rather than any other retailer. If you're not on Amazon, you're hurting your sales.

And if you're an established brand, this works pretty well. Some people will search for your product directly and buy it, leaving reviews. Well reviewed products appear higher up in search results, so people searching for an item type rather than a brand will still see your product appear early in the search results, in turn driving sales. Some proportion of those customers will leave reviews, which helps keep your product high up in the results. As long as your products aren't utterly dreadful, you'll probably maintain that position.

But if you're a brand nobody's ever heard of, things are more difficult. People are unlikely to search for your product directly, so you're relying on turning up in the results for more generic terms. But if you're selling a more generic kind of item (say, a Bluetooth smart bulb) then there's probably a number of other brands nobody's ever heard of selling almost identical objects. If there's no reason for anybody to choose your product then you're probably not going to get any reviews and you're not going to move up the search rankings. Even if your product is better than the competition, a small number of sales means a tiny number of reviews. By the time that number's large enough to matter, you're probably onto a new product cycle.

In summary: if nobody's ever heard of you, you need reviews but you're probably not getting any.

The old way of doing this was to send review samples to journalists, but nobody's going to run a comprehensive review of 3000 different USB cables and even if they did almost nobody would read it before making a decision on Amazon. You need Amazon reviews, but you're not getting any. The obvious solution is to send review samples to people who will leave Amazon reviews. This is where things start getting more dubious.

Amazon run a program called Vine which is intended to solve this problem. Send samples to Amazon and they'll distribute them to a subset of trusted reviewers. These reviewers write a review as normal, and Amazon tag the review with a "Vine Voice" badge which indicates to readers that the reviewer received the product for free. But participation in Vine is apparently expensive, and so there's a proliferation of sites like Snagshout or AMZ Review Trader that use a different model. There's no requirement that you be an existing trusted reviewer and the product probably isn't free. You sign up, choose a product, receive a discount code and buy it from Amazon. You then have a couple of weeks to leave a review, and if you fail to do so you'll lose access to the service. This is completely acceptable under Amazon's rules, which state "If you receive a free or discounted product in exchange for your review, you must clearly and conspicuously disclose that fact". So far, so reasonable.

In reality it's worse than that, with several opportunities to game the system. AMZ Review Trader makes it clear to sellers that they can choose reviewers based on past reviews, giving customers an incentive to leave good reviews in order to keep receiving discounted products. Some customers take full advantage of this, leaving a giant number of 5 star reviews for products they clearly haven't tested and then (presumably) reselling them. What's surprising is that this kind of cynicism works both ways. Some sellers provide two listings for the same product, the second being significantly more expensive than the first. They then offer an attractive discount for the more expensive listing in return for a review, taking it down to approximately the same price as the original item. Once the reviews are in, they can remove the first listing and drop the price of the second to the original price point.

The end result is a bunch of reviews that are nominally honest but are tied to perverse incentives. In effect, the overall star rating tells you almost nothing - you still need to actually read the reviews to gain any insight into whether the customer actually used the product. And when you do write an honest review that the seller doesn't like, they may engage in heavy handed tactics in an attempt to make the review go away.

It's hard to avoid the conclusion that Amazon's review model is broken, but it's not obvious how to fix it. When search ranking is tied to reviews, companies have a strong incentive to do whatever it takes to obtain positive reviews. What we're left with for now is having to laboriously click through a number of products to see whether their rankings come from thoughtful and detailed reviews or are just a mass of 5 star one liners.


29 June 2016

Paul Wise: DebCamp16 day 6

Redirect one person contacting the Debian sysadmin and web teams to Debian user support. Review wiki RecentChanges. Usual spam reporting. Check and fix a derivatives census issue. Suggest sending the titanpad maintenance issue to a wider audience. Update check-all-the-things and copyright review tools wiki page for the licensecheck/devscripts split. Ask if debian-debug could be added to mirror.dc16.debconf.org. Discuss more about the devscripts/licensecheck split. Yesterday I grrred at Debian perl bug #588017, which causes vulnerabilities in check-all-the-things, tried to figure out the scope of the issue and worked around all of the issues I could find. (Perls are shiny and Check All The thingS can be abbreviated as cats.) Today I confirmed with the reporter (Jakub Wilk) that the patch mitigates this. Release check-all-the-things to Debian unstable (finally!!). Discuss with the borg about syncing cats to Ubuntu. Notice autoconf/automake being installed as indirect cats build-deps (via debhelper/dh-autoreconf) and poke relevant folks about this. Answer question about alioth vs debian.org LDAP.

25 June 2016

Paul Wise: DebCamp16 day 2

Review wiki RecentChanges since my bookmark. Usual spam reporting. Mention microG on #debian-mobile. Answer pkg-config question on #debian-mentors. Suggest using UUIDs in response to a debian-arm query. Reported Debian bug #828103 against needrestart. A giant yellow SOS crane between the balcony hacklab and a truly misty city. Locate the 2014 Debian & stuff podcast on archive.org. Poke the SPARC porters in response to a suggestion on debian-www. Mention systemctl daemon-reload wrt buildd service changes. Automate updating some extension lists from check-all-the-things. Reported wishlist Debian bug #828128 against debsources. Engage lizard mode! Wish for better display technology. Nice vegetarian food with nice folks and interesting discussions with interesting locals. Polish and release check-all-the-things. Close bugs I forgot to close in the changelog. Add link to debian-boot on Debootstrap wiki page. Notice first mockup of a theme for Debian stretch. Answer a question about package naming on #debian-mentors. Discuss the future of cross compilation on Debian. Notice a talk about FOSSology & update a wiki page. Mention AsteroidOS and MaruOS on the mobile wiki page. Contemplate how close to the FSDG Debian might be and approaches to improving that.

18 April 2016

Norbert Preining: TeX Live 2016 pretest and Debian packages

Preparations for the release of TeX Live 2016 started some time ago with the freeze of updates in TeX Live 2015. Yesterday we announced the official start of the pretest period. That means that we invite people to test the new release and help fix bugs. At the same time I have uploaded the first set of packages of TeX Live 2016 for Debian to the experimental suite (see the sketch below for installing from there). Concerning the binaries we do expect a few further changes, but hopefully nothing drastic. The most invasive change on the tlmgr side is that cryptographic signatures are now verified to guarantee the authenticity of the downloaded packages, but this is rather irrelevant for Debian users (though I will look into how that works in user mode). Other than that, many packages have been updated or added since the last Debian packages; here is the unified list: acro, animate, appendixnumberbeamer, arabluatex, asapsym, asciilist, babel-belarusian, bibarts, biblatex-bookinarticle, biblatex-bookinother, biblatex-caspervector, biblatex-chicago, biblatex-gost, biblatex-ieee, biblatex-morenames, biblatex-opcit-booktitle, bibtexperllibs, bxdvidriver, bxenclose, bxjscls, bxnewfont, bxpapersize, chemnum, cjk-ko, cochineal, csplain, cstex, datetime2-finnish, denisbdoc, dtx, dvipdfmx-def, ejpecp, emisa, fithesis, fnpct, font-change-xetex, forest, formation-latex-ul, gregoriotex, gzt, hausarbeit-jura, hyperxmp, imakeidx, jacow, l3, l3kernel, l3packages, latex2e, latex2e-help-texinfo-fr, latex-bib2-ex, libertinust1math, lollipop, lt3graph, lua-check-hyphen, lualibs, luamplib, luatexja, mathalfa, mathastext, mcf2graph, media9, metrix, nameauth, ndsu-thesis, newtx, normalcolor, noto, nucleardata, nwejm, ocgx2, pdfcomment, pdfpages, pkuthss, polyglossia, proposal, qcircuit, reledmac, rmathbr, savetrees, scanpages, stex, suftesi, svrsymbols, teubner, tex4ebook, tex-ini-files, tikzmark, tikzsymbols, titlesec, tudscr, typed-checklist, ulthese, visualtikz, xespotcolor, xetex-def, xetexko, ycbook, yinit-otf. Enjoy.
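For Debian users who want to try these packages right away, installing from experimental looks roughly like this; a minimal sketch, assuming you run it as root, and the mirror URL is just an example:

# add an experimental entry if your sources.list does not have one yet
echo 'deb http://httpredir.debian.org/debian experimental main' >> /etc/apt/sources.list
apt-get update
# packages from experimental are never pulled in by default; -t selects it explicitly
apt-get install -t experimental texlive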
